Environment: the AWS Deep Learning AMI with Source Code (Ubuntu, CUDA 8), launched in the N. Virginia region.
%reset
# Check for a GPU
import tensorflow as tf
print(tf.test.gpu_device_name())
%%bash
pwd
In this project, you'll use generative adversarial networks to generate new images of faces.
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the CelebA dataset is complex and this is your first GAN project, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will let you see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
data_dir = 'data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
# Note: because of a matplotlib version problem, the following line of code doesn't work in this workspace.
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first show_n_images examples by changing show_n_images.
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The pixel values of the MNIST and CelebA datasets will be in the range of -0.5 to 0.5, and all images will be 28x28. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black-and-white images with a single color channel, while the CelebA images have 3 color channels (RGB).
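As a quick sanity check of that range, here is a hedged sketch that reuses helper.get_batch exactly as in the MNIST cell above (assuming, as that cell suggests, that it returns a preprocessed NumPy array):
sample = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:1], 28, 28, 'L')
print(sample.min(), sample.max())  # expected to fall within [-0.5, 0.5]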
You'll build the components necessary for a GAN by implementing the following functions below:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
The following cell will check to make sure you have the correct version of TensorFlow and access to a GPU.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder using image_width, image_height, and image_channels
- Z input placeholder using z_dim
- Learning rate placeholder
Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
"""
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
"""
# TODO: Implement Function
    inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs_real, inputs_z, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
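For reference, a minimal sketch of the reuse pattern (the same pair of calls appears later in model_loss; input_real and g_model stand for the real-image placeholder and the generator output):
d_model_real, d_logits_real = discriminator(input_real)            # first call creates the variables
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)   # second call shares the same weights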
1) The leaky ReLU activation function helps with gradient flow and alleviates the problem of sparse (almost-zero) gradients. Max pooling generates sparse gradients, which hurts the stability of GAN training; that is why pooling is not used here.
2) You have used batch normalization to stabilize GAN training by reducing internal covariate shift. See http://kratzert.github.io/2016/02/12/understanding-the-gradient-flow-through-the-batch-normalization-layer.html for a deeper look at batch norm.
3) You have used sigmoid as the activation function for the output layer, which produces probability-like values between 0 and 1.
Improvements to make:
1) Use custom weight initialization, for example Xavier initialization to help convergence by breaking symmetry, or truncated_normal_initializer with stddev=0.02, which improves overall generated image quality, as in the DCGAN paper (see the sketch after the dropout example below).
2) Experiment with various values of alpha (the slope of the leaky ReLU, as stated in the DCGAN paper) between 0.06 and 0.18 and compare your results.
3) Experiment with dropout layers in the discriminator; dropout keeps the discriminator from fitting the data distribution too quickly. If the discriminator ends up dominating the generator, reduce the discriminator's learning rate and increase dropout.
Ref: F. Chollet, "Deep Learning with Python", chapter 8.
Use:
dp_layer = tf.nn.dropout(l_relu_output, keep_prob=0.8) or
dp_layer = tf.layers.dropout(l_relu_output, rate=0.2)
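A hedged sketch of the two initializer options from improvement tip 1; either could be passed as kernel_initializer to the conv layers below:
init_xavier = tf.contrib.layers.xavier_initializer()
init_dcgan = tf.truncated_normal_initializer(stddev=0.02)  # DCGAN-paper-style initialization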
#https://discussions.udacity.com/t/discriminator-variable-scope-reuse/662468
depth = 64  # number of filters, i.e. the dimensionality of the convolution's output space
kernel_size = 5  # height and width of the 2D convolution window
strides = 2
alpha = 0.06  # leaky ReLU slope (Reviewer #1: experiment with values between 0.06 and 0.18, per the DCGAN paper)
# Note: alpha is defined at this level rather than inside the variable_scope so the unit test's
# mocked call signature still matches: mock.call('discriminator', reuse=True)
def discriminator(images, reuse=False):
"""
Create the discriminator network
:param images: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
"""
# TODO: Implement Function
    with tf.variable_scope('discriminator', reuse=reuse):
        # Hidden layer #1: input is 28x28x(1 or 3)
        h1 = tf.layers.conv2d(images, filters=depth, kernel_size=kernel_size, strides=strides, padding='same',
                              kernel_initializer=tf.contrib.layers.xavier_initializer())
        relu1 = tf.maximum(alpha * h1, h1)
        relu1_dropout = tf.nn.dropout(relu1, keep_prob=0.8)
        # 14x14x64 now
        # Hidden layer #2:
        h2 = tf.layers.conv2d(relu1_dropout, filters=depth*2, kernel_size=kernel_size, strides=strides, padding='same',
                              kernel_initializer=tf.contrib.layers.xavier_initializer())
        bn2 = tf.layers.batch_normalization(h2, training=True)
        relu2 = tf.maximum(alpha * bn2, bn2)
        relu2_dropout = tf.nn.dropout(relu2, keep_prob=0.8)
        # 7x7x128 now
        # Hidden layer #3:
        h3 = tf.layers.conv2d(relu2_dropout, filters=depth*4, kernel_size=kernel_size, strides=strides, padding='same',
                              kernel_initializer=tf.contrib.layers.xavier_initializer())
        bn3 = tf.layers.batch_normalization(h3, training=True)
        relu3 = tf.maximum(alpha * bn3, bn3)
        relu3_dropout = tf.nn.dropout(relu3, keep_prob=0.8)
        # 4x4x256 now (28 -> 14 -> 7 -> 4 with 'same' padding and stride 2)
        # Flatten it
        flat = tf.reshape(relu3_dropout, (-1, 4*4*depth*4))
        logits = tf.layers.dense(flat, 1)
        out = tf.nn.sigmoid(logits)
        return out, logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
You have used tanh as the last layer of the generator output, so you will normalize the input images to be between -1 and 1 in the train function.
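A minimal sketch of that normalization (helper.Dataset yields pixel values in [-0.5, 0.5], per the preprocessing note above, so doubling maps them onto tanh's output range):
batch_images = batch_images * 2.0  # [-0.5, 0.5] -> [-1, 1]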
You have met the basic requirements, but I recommend you work on the tips below and comment on the improvements you see in the generated images.
1) Experiment with more conv2d_transpose layers in the generator block so that there are enough parameters in the network to learn the concepts of the input images. DCGAN models produce better results when the generator is bigger than the discriminator.
Suggestion: 1024 -> 512 -> 256 -> 128 -> out_channel_dim (use a stride of 1 to increase the number of layers without changing the size of the output image). Alternatively, the first layer's depth could be increased from 1024 to 2048.
2) Experiment with different slope values for leaky_relu, as noted for the discriminator.
3) Experiment with dropout in the generator, so that it is less prone to over-fitting the data distribution and avoids generating images that look like noise.
(CONV/FC -> BatchNorm -> ReLU (or other activation) -> Dropout -> CONV/FC), as in the sketch below.
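A minimal one-block sketch of that ordering (x, is_train, and alpha are hypothetical placeholders here):
h = tf.layers.conv2d_transpose(x, 512, kernel_size=5, strides=2, padding='same')
h = tf.layers.batch_normalization(h, training=is_train)
h = tf.maximum(alpha * h, h)  # leaky ReLU
h = tf.nn.dropout(h, keep_prob=0.8)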
import math
z = 100
int(math.sqrt(z))  # scratch check: 10
reshape_d = 5  # initial spatial size: 5 -> 10 -> 20 -> 24 -> 28 through the layers below
def generator(z, out_channel_dim, is_train=True):
"""
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
"""
# TODO: Implement Function
with tf.variable_scope('generator', reuse=not is_train):
        # Reverse of the discriminator's downsampling path
        # Fully connected layer #1:
        h1 = tf.layers.dense(z, reshape_d*reshape_d*1024)
        h1 = tf.reshape(h1, (-1, reshape_d, reshape_d, 1024))  # 1024 filters chosen following the DCGAN paper
        h1 = tf.layers.batch_normalization(h1, training=is_train)
        h1 = tf.maximum(alpha * h1, h1)
        h1_dropout = tf.nn.dropout(h1, keep_prob=0.8)
        # (?, 5, 5, 1024) now
        # Hidden layer #2:
        h2 = tf.layers.conv2d_transpose(h1_dropout, 512, kernel_size=kernel_size,
                                        strides=2, padding='same',
                                        kernel_initializer=tf.contrib.layers.xavier_initializer())
        h2 = tf.layers.batch_normalization(h2, training=is_train)
        h2 = tf.maximum(alpha * h2, h2)
        h2_dropout = tf.nn.dropout(h2, keep_prob=0.8)
        # (?, 10, 10, 512) now
        # Hidden layer #3:
        h3 = tf.layers.conv2d_transpose(h2_dropout, 256, kernel_size=kernel_size,
                                        strides=2, padding='same',
                                        kernel_initializer=tf.contrib.layers.xavier_initializer())
        h3 = tf.layers.batch_normalization(h3, training=is_train)
        h3 = tf.maximum(alpha * h3, h3)
        h3_dropout = tf.nn.dropout(h3, keep_prob=0.8)
        # (?, 20, 20, 256) now
        # Hidden layer #4:
        h4 = tf.layers.conv2d_transpose(h3_dropout, 128, kernel_size=kernel_size,
                                        strides=1, padding='valid',
                                        kernel_initializer=tf.contrib.layers.xavier_initializer())
        h4 = tf.layers.batch_normalization(h4, training=is_train)
        h4 = tf.maximum(alpha * h4, h4)
        h4_dropout = tf.nn.dropout(h4, keep_prob=0.8)
        # (?, 24, 24, 128) now ('valid' padding with kernel 5 and stride 1 grows 20 -> 24)
        # Output layer #5:
        logits = tf.layers.conv2d_transpose(h4_dropout, out_channel_dim, kernel_size=kernel_size,
                                            strides=1, padding='valid',
                                            kernel_initializer=tf.contrib.layers.xavier_initializer())
        fake_image = tf.tanh(logits)
        # (?, 28, 28, out_channel_dim) now
        return fake_image
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
Experiment with label smoothing for the discriminator loss; it prevents the discriminator from becoming too strong and helps it generalize better. Refer to https://arxiv.org/abs/1606.03498
Below is some starter code:
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
                                            labels=tf.ones_like(d_model_real) * (1 - smooth)))
smooth=0.1
def model_loss(input_real, input_z, out_channel_dim):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
# TODO: Implement Function
    # Build the generator output and run the discriminator on real and fake images
g_model = generator(input_z, out_channel_dim)
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
    # Calculate the losses for the real and fake (generated) images
    # Label smoothing on the real labels prevents the discriminator from becoming too strong
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, \
labels=tf.ones_like(d_model_real)*(1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
    # The total discriminator loss is the sum of the real-image and fake-image losses
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
To avoid internal covariate shift during training, you use batch norm. But in TensorFlow, **when is_train is true and you have used batch norm, the mean and variance need to be updated before optimization**. So you add a control dependency on the update ops before optimizing the network.
More info: http://ruishu.io/2016/12/27/batchnorm/
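The general TF1 pattern looks like this (a sketch with placeholder loss and learning_rate names; model_opt below filters the update ops by scope name instead of taking them all):
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)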
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# TODO: Implement Function
    # Gather all trainable variables and split them by scope-name prefix
    t_vars = tf.trainable_variables()
    d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
    g_vars = [var for var in t_vars if var.name.startswith('generator')]
    # Discriminator optimization
    d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
    # Ensure the generator's batch-norm moving statistics are updated before its train step
    # runs (the generator is also used with is_train=False to draw samples).
    # Ref: http://ruishu.io/2016/12/27/batchnorm/
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    g_updates = [op for op in update_ops if op.name.startswith('generator')]
    with tf.control_dependencies(g_updates):
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
"""
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
"""
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Implement train to build and train the GANs. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.
Great work combining all the functions together and making it a DCGAN.
Good job scaling the input images to the same scale as the generated ones using batch_images *= 2.0.
Tip:
Execute the optimization for generator twice. This ensures that the discriminator loss does not go to 0 and impede learning.
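A hedged sketch of that tip as it would look inside the batch loop of train below (same feed dicts as the single update):
_ = sess.run(d_opt, feed_dict={real_input_img: batch_images, z_input: z_sample, lr: learning_rate})
_ = sess.run(g_opt, feed_dict={z_input: z_sample, lr: learning_rate})
_ = sess.run(g_opt, feed_dict={z_input: z_sample, lr: learning_rate})  # second generator step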
Extra:
1) A talk on "How to Train a GAN" by one of the authors of the original DCGAN paper: https://www.youtube.com/watch?v=X1mUN6dD8uE
2) A post on GAN hacks: https://github.com/soumith/ganhacks
3) Plot the discriminator and generator losses for a better understanding of training. You can use the snippet below inside the training loop:
d,_ = sess.run(…)
g,_ = sess.run(…)
d_loss_vec.append(d)
g_loss_vec.append(g)
At the end, you can include the code below to plot the collected loss arrays:
Discriminator_loss, = plt.plot(d_loss_vec, color='b', label='Discriminator loss')
Generator_loss, = plt.plot(g_loss_vec, color='r', label='Generator loss')
plt.legend(handles=[Discriminator_loss, Generator_loss])
import matplotlib.pyplot as plt
%matplotlib inline
def train(epochs, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
"""
Train the GAN
:param epochs: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
"""
# TODO: Build Model
_, img_width, img_height, img_channels = data_shape # 28x28x3 or 28x28x1
real_input_img, z_input, lr = model_inputs(img_width, img_height, img_channels, z_dim)
d_loss, g_loss = model_loss(real_input_img, z_input, img_channels)
d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
steps = 0
samples, losses = [], []
n_images = 25
print_every = 20
show_every = 100
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for batch_images in get_batches(batch_size):
# TODO: Train Model
steps += 1
                batch_images *= 2.0  # scale from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output
# Sample random "noise vector" for generator
z_sample = np.random.uniform(-1, 1, (batch_size, z_dim))
# Run optimizers
_ = sess.run(d_opt, feed_dict={real_input_img: batch_images, \
z_input: z_sample, \
lr: learning_rate})
_ = sess.run(g_opt, feed_dict={z_input: z_sample, \
lr: learning_rate})
if steps % print_every == 0:
                    # Every print_every steps, get the losses and print them out
train_loss_d = d_loss.eval({z_input: z_sample, \
real_input_img: batch_images})
train_loss_g = g_loss.eval({z_input: z_sample})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
                # Show the generator's image output every show_every steps
if steps % show_every == 0:
show_generator_output(sess, n_images, z_input, img_channels, data_image_mode)
return losses
Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
The hyperparameters chosen are correct. You can further improve the quality of the generated images by experimenting with the parameters and the tips provided for the discriminator, generator, and model loss. Below are a few extra tips on choosing hyperparameters for starters.
Tips: 1) Try different learning rates between 0.0002 and 0.0008; this DCGAN architecture remains stable within that range.
2) Experiment with different values of beta1 between 0.2 and 0.5 and compare your results. Here's a good post explaining the importance of the beta values and which values might work better empirically: http://ruder.io/optimizing-gradient-descent/index.html#adam
3) An important point to note is that batch size and learning rate are linked: if the batch size is too small, the gradients become more unstable and you would need to reduce the learning rate, and vice versa. A good starting point for experimenting with batch size is somewhere between 16 and 32.
Extra: You can also look into Population Based Training of neural networks (https://deepmind.com/blog/population-based-training-neural-networks/), a newer method that lets an experimenter quickly choose the best set of hyperparameters and model for the task.
batch_size = 32  # Reviewer #1: start experimenting with batch size somewhere between 16 and 32
z_dim = 128
learning_rate = 0.0002
beta1 = 0.5  # AdamOptimizer. Reviewer #1: experiment with beta1 values between 0.2 and 0.5
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
losses=train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,\
mnist_dataset.shape, mnist_dataset.image_mode)
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], color='b', label='Discriminator loss')
plt.plot(losses.T[1], color='r', label='Generator loss')
plt.title("Training Losses")
plt.legend()
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], color='b', label='Discriminator loss')
plt.plot(losses.T[1], color='r', label='Generator loss')
plt.title("Training Losses")
ax.set_ylim(0,2.5)
plt.legend()
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
mnist_dataset.shape
Run your GAN on CelebA. One epoch takes around 20 minutes on an average GPU. You can run the whole epoch or stop when it starts to generate realistic faces.
If you want to generate varied face shapes, experiment with the value of z_dim (probably in the range 128 - 256).
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
losses=train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], color='b', label='Discriminator loss')
plt.plot(losses.T[1], color='r', label='Generator loss')
plt.title("Training Losses")
plt.legend()
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], color='b', label='Discriminator loss')
plt.plot(losses.T[1], color='r', label='Generator loss')
plt.title("Training Losses")
ax.set_ylim(0,2.5)
plt.legend()
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
celeba_dataset.shape
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and export it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.